Large deep learning models have achieved remarkable success in many scenarios. However, training large models is usually challenging, e.g., due to the high computational cost, the unstable and painfully slow optimization procedure, and the vulnerability to overfitting. To alleviate these problems, this work studies a divide-and-conquer strategy, i.e., dividing a large model into smaller modules, training them independently, and reassembling the trained modules to obtain the target model. This approach is promising since it avoids directly training large models from scratch. Nevertheless, implementing this idea is non-trivial, as it is difficult to ensure the compatibility of the independently trained modules. In this paper, we present an elegant solution to address this issue, i.e., we introduce a global, shared meta model to implicitly link all the modules together. This enables us to train highly compatible modules that collaborate effectively when they are assembled together. We further propose a module incubation mechanism that enables the meta model to be designed as an extremely shallow network. As a result, the additional overhead introduced by the meta model is minimized. Though conceptually simple, our method significantly outperforms end-to-end (E2E) training in terms of both final accuracy and training efficiency. For example, on top of ViT-Huge, it improves the accuracy by 2.7% compared to the E2E baseline on ImageNet-1K, while reducing the training cost by 43%. Code is available at https://github.com/LeapLabTHU/Model-Assembling.
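A minimal PyTorch sketch of the divide-and-conquer idea described above: a very shallow, shared meta model provides the surrounding context while each module is trained independently in its slot (module incubation), and the trained modules are then reassembled. The layer sizes, the placeholder objective, and the omission of the meta model's own pre-training are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

DIM, NUM_MODULES = 256, 4

def make_module():
    # A stand-in for one large module (e.g., a group of transformer blocks).
    return nn.Sequential(nn.Linear(DIM, DIM), nn.GELU(), nn.Linear(DIM, DIM))

# A very shallow, shared meta model: one cheap layer per module slot.
meta_layers = nn.ModuleList(nn.Linear(DIM, DIM) for _ in range(NUM_MODULES))

def incubate(idx, module, steps=100):
    """Train one module inside the shared meta model (module incubation)."""
    hybrid = nn.Sequential(*[module if i == idx else meta_layers[i]
                             for i in range(NUM_MODULES)])
    opt = torch.optim.AdamW(module.parameters(), lr=1e-3)  # only this module is updated
    for _ in range(steps):
        x = torch.randn(32, DIM)        # placeholder batch
        loss = hybrid(x).pow(2).mean()  # placeholder objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return module

# Train each module independently, then reassemble them into the target model.
modules = [incubate(i, make_module()) for i in range(NUM_MODULES)]
assembled = nn.Sequential(*modules)
```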
With the rapid development of information technology, online platforms (e.g., news portals and social media) generate massive amounts of web information at every moment. It is therefore crucial to extract structured event representations from social streams. Typically, existing event extraction studies employ pattern matching, machine learning, or deep learning methods to perform the event extraction task. However, due to the unique characteristics of the Chinese language, the performance of Chinese event extraction is not as good as that of English. In this paper, we propose an integrated framework for Chinese event extraction. The proposed method is a multi-channel-input neural framework that integrates semantic features and syntactic features. Semantic features are captured by a BERT architecture. Part-of-speech (POS) features and dependency parsing (DP) features are captured by POS embeddings and a graph convolutional network (GCN), respectively. We also evaluate our model on a real-world dataset. Experimental results show that the proposed method significantly outperforms the benchmark approaches.
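A hedged sketch of what such a multi-channel-input model could look like in PyTorch: precomputed BERT token embeddings carry the semantic channel, a POS embedding table and a one-layer GCN over the dependency-parse adjacency carry the syntactic channels, and the three are concatenated for per-token classification. All dimensions, vocabulary sizes, and the single-layer fusion head are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # adj: (batch, seq, seq) dependency-parse adjacency (with self-loops).
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear(adj @ x / deg))

class MultiChannelEE(nn.Module):
    def __init__(self, bert_dim=768, pos_vocab=50, pos_dim=64, num_labels=34):
        super().__init__()
        self.pos_emb = nn.Embedding(pos_vocab, pos_dim)
        self.gcn = SimpleGCNLayer(bert_dim)
        self.classifier = nn.Linear(bert_dim * 2 + pos_dim, num_labels)

    def forward(self, bert_hidden, pos_ids, dep_adj):
        syntax = self.gcn(bert_hidden, dep_adj)   # dependency (DP) channel
        pos = self.pos_emb(pos_ids)               # part-of-speech channel
        fused = torch.cat([bert_hidden, syntax, pos], dim=-1)
        return self.classifier(fused)             # per-token event labels

model = MultiChannelEE()
logits = model(torch.randn(2, 16, 768),                 # precomputed BERT embeddings
               torch.randint(0, 50, (2, 16)),           # POS tag ids
               torch.eye(16).repeat(2, 1, 1))           # toy dependency adjacency
```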
Research on crude oil price forecasting has attracted enormous attention from scholars and policymakers due to its significant impact on the global economy. Beyond supply and demand, crude oil prices are strongly influenced by various factors such as economic development, financial markets, conflicts, wars, and political events. Most previous studies treat crude oil price forecasting as a time series or econometric variable prediction problem. Although the impact of real-time news events has recently been considered, most works mainly use raw news headlines or topic models to extract text features, without deeply exploring the event information. In this study, a novel crude oil price forecasting framework, AGESL, is proposed to address this problem. In our approach, an open-domain event extraction algorithm is utilized to extract underlying related events, and a text sentiment analysis algorithm is used to extract sentiment from large-scale news. A deep neural network then integrates the news event features, sentiment features, and historical price features to predict future crude oil prices. Empirical experiments are conducted on West Texas Intermediate (WTI) crude oil price data, and the results show that our approach achieves superior performance compared with several benchmark methods.
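The fusion step could look roughly like the following sketch, where an LSTM encodes the historical price window and its final state is concatenated with the extracted event and sentiment features before a small regression head. The dimensions and layout are illustrative assumptions rather than the AGESL architecture.

```python
import torch
import torch.nn as nn

class PriceFusionNet(nn.Module):
    def __init__(self, event_dim=32, sent_dim=8, hidden=64):
        super().__init__()
        self.price_rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + event_dim + sent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                    # next-step price estimate
        )

    def forward(self, price_history, event_feat, sent_feat):
        _, (h, _) = self.price_rnn(price_history)    # encode the price series
        fused = torch.cat([h[-1], event_feat, sent_feat], dim=-1)
        return self.head(fused)

net = PriceFusionNet()
pred = net(torch.randn(4, 30, 1),    # 30-day price window
           torch.randn(4, 32),       # extracted event features
           torch.randn(4, 8))        # news sentiment features
```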
A key challenge in online news recommendation is helping users find articles they are interested in. Traditional news recommendation methods typically rely on single-channel news information, which is insufficient to encode news and user representations. Recent studies use multi-channel news information, e.g., title, category, and agency, to enhance news and user representations. However, these methods only use various attention mechanisms to fuse the multi-view embeddings, without considering the deeper, higher-level information contained in the context. They encode news content at the word level and jointly train the attention parameters within the recommendation network, which requires larger corpora to train the model. We propose an Event Extraction-based News Recommendation (EENR) framework to overcome these shortcomings, leveraging event extraction to abstract higher-level information. EENR also uses a two-stage strategy to reduce the number of parameters in the subsequent parts of the recommendation network. In the first stage, we train the event extraction module on an external corpus; in the second stage, we apply the trained model to the news recommendation dataset to predict event-level information, including event types, roles, and arguments. We then fuse multi-channel information, including event information, news titles, and categories, to encode news and users. Extensive experiments on a real-world dataset show that our EENR method can effectively improve the performance of news recommendation. Finally, we also explore the reasonableness of using higher-level abstract information to replace news body content.
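A small sketch of the second-stage encoding idea: event types (predicted by the stage-one extractor), categories, and title vectors are embedded and fused into a news vector; a user vector is pooled from clicked news and scored against candidates by a dot product. Embedding sizes, the mean-pooling choice, and the fusion layer are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NewsEncoder(nn.Module):
    def __init__(self, n_event_types=40, n_categories=20, title_dim=128, dim=128):
        super().__init__()
        self.event_emb = nn.Embedding(n_event_types, dim)
        self.cat_emb = nn.Embedding(n_categories, dim)
        self.title_proj = nn.Linear(title_dim, dim)   # precomputed title vector
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, event_type, category, title_vec):
        x = torch.cat([self.event_emb(event_type),
                       self.cat_emb(category),
                       self.title_proj(title_vec)], dim=-1)
        return torch.tanh(self.fuse(x))

encoder = NewsEncoder()
# User representation: mean-pool the encodings of previously clicked news.
clicked = encoder(torch.randint(0, 40, (8,)), torch.randint(0, 20, (8,)), torch.randn(8, 128))
user_vec = clicked.mean(dim=0)
candidate = encoder(torch.tensor([3]), torch.tensor([5]), torch.randn(1, 128))
score = candidate @ user_vec          # click-probability logit for ranking
```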
Social media platforms may provide potential space for discourse containing hate speech and, even worse, can serve as a propagation mechanism for hate crimes. The FBI's Uniform Crime Reporting (UCR) Program collects hate crime data and releases statistical reports annually. These statistics provide information for determining national hate crime trends. They can also provide valuable holistic and strategic insights for law enforcement agencies, or justify specific legislation for lawmakers. However, the reports are usually released the following year, lagging behind many immediate needs. Recent research mostly focuses on hate speech detection in social media text or on empirical studies of the impact of confirmed crimes. This paper proposes a framework that first utilizes text mining techniques to extract hate crime events from New York Times news, and then uses the results to facilitate the prediction of national-level and state-level hate crime trends in the United States. Experimental results show that our method can significantly improve prediction performance compared with time series or regression methods without event-related factors. Our framework broadens the methods available for national-level and state-level hate crime trend prediction.
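On the prediction side, the gain from event-related factors can be illustrated with a toy regression in which mined monthly event counts are added as an extra feature next to a time trend. The synthetic data, the linear model, and the train/test split below are assumptions purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
months = np.arange(36)
event_counts = rng.poisson(5, size=36)            # hate-crime events mined from news
hate_crimes = 100 + 0.5 * months + 3 * event_counts + rng.normal(0, 5, 36)

# Baseline time-only regression vs. event-augmented regression.
X_time = months.reshape(-1, 1)
X_aug = np.column_stack([months, event_counts])
baseline = LinearRegression().fit(X_time[:-6], hate_crimes[:-6])
augmented = LinearRegression().fit(X_aug[:-6], hate_crimes[:-6])
print("baseline MAE:", np.abs(baseline.predict(X_time[-6:]) - hate_crimes[-6:]).mean())
print("augmented MAE:", np.abs(augmented.predict(X_aug[-6:]) - hate_crimes[-6:]).mean())
```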
With the rapid development of information technology, online platforms have produced enormous text resources. As a particular form of information extraction (IE), event extraction (EE) has gained increasing popularity due to its ability to automatically extract events from human language. However, there are only limited literature surveys on event extraction. Existing review works either spend much effort describing the details of various approaches or focus on a particular field. This study provides a comprehensive overview of state-of-the-art event extraction methods and their applications from text, covering both closed-domain and open-domain event extraction. A characteristic of this survey is that it provides an overview of moderate complexity, avoiding excessive detail on particular approaches. It focuses on discussing the common characteristics, application fields, advantages, and disadvantages of representative works, setting aside the particularities of individual methods. Finally, we summarize the common issues, current solutions, and future research directions. We hope this work can help researchers and practitioners obtain a quick overview of recent event extraction research.
Recent advances in wireless technology enable connected autonomous vehicles (CAVs) to gather information about their environment via vehicle-to-vehicle (V2V) communication. In this work, we design an information-sharing-based multi-agent reinforcement learning (MARL) framework for CAVs, which takes advantage of the extra information when making decisions to improve traffic efficiency and safety. Our proposed safe actor-critic algorithm has two new techniques: the truncated Q-function and the safe action mapping. The truncated Q-function utilizes the shared information from neighboring CAVs so that the joint state and action space of the Q-function does not grow with the size of the CAV system in our algorithm. We prove a bound on the approximation error between the truncated Q-function and the global Q-function. The safe action mapping provides a provable safety guarantee for both training and execution based on control barrier functions. Using the CARLA simulator, we show that our approach improves the efficiency of the CAV system in terms of average velocity and comfort under different CAV ratios and different traffic densities. We also show that our approach avoids executing unsafe actions and always maintains a safe distance from other vehicles. We further construct an obstacle scenario to show that shared vision helps CAVs observe obstacles earlier and take action to avoid traffic jams.
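The two algorithmic ingredients can be sketched as follows: a Q-network whose input is the ego state plus a fixed number of nearest-neighbor states (so its size does not grow with the fleet), and a CBF-style action mapping that clips a proposed acceleration so a minimum-gap barrier is preserved. The state layout, neighbor count, simplified kinematics, and barrier parameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

K_NEIGHBORS, STATE_DIM, ACTION_DIM = 3, 4, 1

class TruncatedQ(nn.Module):
    """Q(s_ego, s_neighbors, a): input size is fixed regardless of fleet size."""
    def __init__(self):
        super().__init__()
        in_dim = STATE_DIM * (1 + K_NEIGHBORS) + ACTION_DIM
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, ego, neighbors, action):
        # neighbors: (batch, K_NEIGHBORS, STATE_DIM) shared via V2V, nearest first.
        x = torch.cat([ego, neighbors.flatten(1), action], dim=-1)
        return self.net(x)

def safe_action(accel, gap, rel_speed, min_gap=5.0, alpha=0.5, dt=0.1):
    """Clip a proposed ego acceleration so the barrier h = gap - min_gap stays >= 0.

    Simplified kinematics: gap_next ~= gap + (rel_speed - accel * dt) * dt, with the
    discrete CBF-style condition h_next >= (1 - alpha) * h.
    """
    h = gap - min_gap
    accel_max = (alpha * h + rel_speed * dt) / (dt * dt)
    return torch.minimum(accel, accel_max)

q = TruncatedQ()
value = q(torch.randn(2, STATE_DIM), torch.randn(2, K_NEIGHBORS, STATE_DIM),
          torch.randn(2, ACTION_DIM))
clipped = safe_action(torch.tensor(3.0), gap=torch.tensor(8.0), rel_speed=torch.tensor(-1.0))
```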
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
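As a rough illustration of the image-level adaptation idea, global photometric alignment can be approximated by matching a target-domain image's per-channel statistics to source-domain statistics, as in the sketch below; moment matching in RGB space is a simplification of the paper's module, and the statistics shown are made up.

```python
import torch

def photometric_align(target_img, src_mean, src_std, eps=1e-6):
    # target_img: (3, H, W); src_mean/src_std: per-channel source-domain statistics.
    t_mean = target_img.mean(dim=(1, 2), keepdim=True)
    t_std = target_img.std(dim=(1, 2), keepdim=True)
    normalized = (target_img - t_mean) / (t_std + eps)
    return normalized * src_std.view(3, 1, 1) + src_mean.view(3, 1, 1)

aligned = photometric_align(torch.rand(3, 512, 1024),
                            src_mean=torch.tensor([0.28, 0.32, 0.28]),
                            src_std=torch.tensor([0.18, 0.18, 0.18]))
```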
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
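The style-aware adaptation can be sketched as a feed-forward block whose output is modulated by a scale and shift predicted from the style code, as below. This feature-modulation form is an illustrative stand-in for the paper's style-aware adaptive transformer; the dimensions are assumptions.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    def __init__(self, dim=256, style_dim=64, hidden=1024):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.to_scale_shift = nn.Linear(style_dim, 2 * dim)  # predicted from the style code

    def forward(self, x, style_code):
        # x: (batch, frames, dim); style_code: (batch, style_dim).
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=-1)
        return x + self.ffn(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

layer = StyleAdaptiveFFN()
out = layer(torch.randn(2, 50, 256), torch.randn(2, 64))
```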
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
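The core self-supervision signal can be sketched as a photometric reconstruction error: back-project the target pixels with predicted depth, transform them by the predicted ego-motion, project them into the source frame, and compare the warped source against the target. The single-scale L1 error, the plain pinhole model, and the toy inputs below are illustrative simplifications of the photometric-error objective, not the PPGeo implementation.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, pose, K):
    """target/source: (B,3,H,W); depth: (B,1,H,W); pose: (B,4,4); K: (B,3,3)."""
    B, _, H, W = target.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()   # (3,H,W)
    pix = pix.view(1, 3, -1).expand(B, -1, -1)                        # (B,3,HW)
    cam = torch.inverse(K) @ pix * depth.view(B, 1, -1)               # back-project with depth
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)          # homogeneous coords
    proj = K @ (pose @ cam_h)[:, :3]                                  # into the source view
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1)
    warped = F.grid_sample(source, grid.view(B, H, W, 2), align_corners=True)
    return (warped - target).abs().mean()

loss = photometric_loss(torch.rand(1, 3, 64, 96), torch.rand(1, 3, 64, 96),
                        torch.rand(1, 1, 64, 96) + 0.5,               # toy depth map
                        torch.eye(4).unsqueeze(0),                    # toy ego-motion
                        torch.tensor([[[50., 0., 48.], [0., 50., 32.], [0., 0., 1.]]]))
```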